Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

Authors

  • Mike Wu
  • Michael C. Hughes
  • Sonali Parbhoo
  • Maurizio Zazzi
  • Volker Roth
  • Finale Doshi-Velez
Abstract

The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than those trained with simpler L1 or L2 penalties, without sacrificing predictive power.
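To make the idea concrete, the following is a minimal sketch (not the authors' released code) of the quantity tree regularization targets: the average root-to-leaf path length of a small decision tree fit to the deep model's own class-probability predictions. In the paper this penalty is made differentiable through a learned surrogate network; the sketch below only computes the non-differentiable quantity with scikit-learn, and the helper names and toy model are illustrative assumptions.

```python
# A minimal sketch of the average decision-path length (APL) of a surrogate
# tree fit to a model's class-probability predictions. The paper regularizes
# a differentiable approximation of this quantity; here it is computed
# directly (non-differentiably) for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(X, predict_proba_fn, max_leaf_nodes=8):
    """Fit a small surrogate tree to the model's binarized predictions on X
    and return the mean root-to-leaf path length over the samples."""
    # Binarize the deep model's class-probability outputs.
    y_hat = (predict_proba_fn(X) >= 0.5).astype(int)
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes).fit(X, y_hat)
    # decision_path returns, per sample, an indicator of the nodes visited
    # from root to leaf; summing each row gives that sample's path length.
    node_indicator = tree.decision_path(X)
    path_lengths = np.asarray(node_indicator.sum(axis=1)).ravel()
    return path_lengths.mean()

# Toy stand-in for a trained deep model: a fixed nonlinear scoring function.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
toy_model = lambda X: 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1] + X[:, 2])))
print("APL of surrogate tree:", average_path_length(X, toy_model))
```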


Similar articles

Generalized Sparse Regularization with Application to fMRI Brain Decoding

Many current medical image analysis problems involve learning thousands or even millions of model parameters from extremely few samples. Employing sparse models provides an effective means for handling the curse of dimensionality, but other propitious properties beyond sparsity are typically not modeled. In this paper, we propose a simple approach, generalized sparse regularization (GSR), for i...


Supersparse Linear Integer Models for Predictive Scoring Systems

Scoring systems are classification models that make predictions using a sparse linear combination of variables with integer coefficients. Such systems are frequently used in medicine because they are interpretable; that is, they only require users to add, subtract and multiply a few meaningful numbers in order to make a prediction. See, for instance, these commonly used scoring systems: (Gage e...
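As a concrete illustration of how such a scoring system is used at prediction time, the sketch below sums a handful of integer points attached to binary features and compares the total against a threshold. The feature names, point values, and threshold are invented for demonstration and do not correspond to any validated clinical score.

```python
# A hedged illustration of a supersparse linear integer model at prediction
# time: a few integer coefficients on meaningful binary features, summed and
# compared to a threshold. All values here are made up.
SCORE_CARD = {
    "age_over_75": 2,
    "prior_event": 2,
    "hypertension": 1,
    "diabetes": 1,
}
THRESHOLD = 3  # predict "high risk" when the total score reaches this value

def predict_risk(patient: dict) -> tuple:
    score = sum(points for feature, points in SCORE_CARD.items()
                if patient.get(feature))
    return score, "high risk" if score >= THRESHOLD else "low risk"

print(predict_risk({"age_over_75": True, "hypertension": True}))  # (3, 'high risk')
print(predict_risk({"diabetes": True}))                           # (1, 'low risk')
```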


Structured Sparsity with Group-Graph Regularization

In many learning tasks with structural properties, structural sparsity methods help induce sparse models, usually leading to better interpretability and higher generalization performance. One popular approach is to use group sparsity regularization that enforces sparsity on the clustered groups of features, while another popular approach is to adopt graph sparsity regularization that considers ...
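For reference, the group-sparsity penalty mentioned above is typically the group-lasso norm: the sum, over predefined feature groups, of the Euclidean norm of each group's coefficients, which pushes entire groups of coefficients to zero together. The short sketch below computes this penalty for an illustrative grouping of coefficients.

```python
# A minimal sketch of the group-lasso penalty: sum of per-group L2 norms.
# The coefficient vector and grouping below are illustrative.
import numpy as np

def group_lasso_penalty(w, groups):
    """w: coefficient vector; groups: list of index arrays, one per group."""
    return sum(np.linalg.norm(w[idx]) for idx in groups)

w = np.array([0.0, 0.0, 0.0, 1.5, -2.0, 0.3])
groups = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5])]
# The first group contributes 0: it has been zeroed out as a unit.
print(group_lasso_penalty(w, groups))  # 0 + 2.5 + 0.3 = 2.8
```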


Matrix Computations & Scientific Computing Seminar

Extracting useful information from high-dimensional data is the focus of today’s statistical research and practice. After broad success of statistical machine learning on prediction through regularization, interpretability is gaining attention and sparsity has been used as its proxy. With the virtues of both regularization and sparsity, Lasso (L1 penalized L2 minimization) and its extensions ha...
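As a small reminder of what the Lasso referred to above does in practice, the sketch below fits scikit-learn's L1-penalized least-squares estimator to synthetic data in which only a few of many features carry signal, then prints the indices of the surviving coefficients. The data and regularization strength are illustrative choices.

```python
# A short sketch of the Lasso (L1-penalized least squares) selecting a sparse
# set of predictors from synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_coef = np.zeros(20)
true_coef[[0, 3, 7]] = [2.0, -1.5, 1.0]          # only 3 of 20 features matter
y = X @ true_coef + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.flatnonzero(model.coef_))
```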


Structured Sparsity in Natural Language Processing: Models, Algorithms and Applications

This tutorial will cover recent advances in sparse modeling with diverse applications in natural language processing (NLP). A sparse model is one that uses a relatively small number of features to map an input to an output, such as a label sequence or parse tree. The advantages of sparsity are, among others, compactness and interpretability; in fact, sparsity is currently a major theme in stati...
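In the sense used above, a sparse NLP model keeps only a small number of active features. A hedged toy example: an L1-regularized bag-of-words classifier fit to an invented four-sentence corpus, reporting how many word features end up with non-zero weight.

```python
# A toy sketch of sparsity in an NLP model: an L1-regularized bag-of-words
# classifier. The corpus and labels are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great plot and acting", "terrible pacing, bad acting",
         "great soundtrack", "bad plot, terrible ending"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(texts)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, labels)
print("active word features:", np.count_nonzero(clf.coef_), "of", X.shape[1])
```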



Journal:
  • CoRR

Volume: abs/1711.06178
Issue: –
Pages: –
Publication date: 2017